FaceQnet: Quality Assessment for Face Recognition based on Deep Learning
In this paper we develop a Quality Assessment approach for face recognition
based on deep learning. The method consists of a Convolutional Neural Network,
FaceQnet, that is used to predict the suitability of a specific input image for
face recognition purposes. The training of FaceQnet is done using the VGGFace2
database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images
with quality information related to their ICAO compliance level. The
groundtruth quality labels are obtained using FaceNet to generate comparison
scores. We employ the groundtruth data to fine-tune a ResNet-based CNN, making
it capable of returning a numerical quality measure for each input image.
Finally, we verify if the FaceQnet scores are suitable to predict the expected
performance when employing a specific image for face recognition with a COTS
face recognition system. Several conclusions can be drawn from this work, most
notably: 1) we managed to employ an existing ICAO compliance framework and a
pretrained CNN to automatically label data with quality information, 2) we
trained FaceQnet for quality estimation by fine-tuning a pre-trained face
recognition network (ResNet-50), and 3) we have shown that the predictions from
FaceQnet are highly correlated with the face recognition accuracy of a
state-of-the-art commercial system not used during development. FaceQnet is
publicly available on GitHub.
Comment: Preprint version of a paper accepted at ICB 201
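The groundtruth labeling step described above (using comparison scores from a pretrained recognizer as quality labels) can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the function name `quality_label`, the toy embeddings, and the mapping of cosine similarity into [0, 1] are assumptions; in the paper the embeddings come from FaceNet and the gallery would contain ICAO-compliant images of the same subject.

```python
import numpy as np

def quality_label(probe_emb, gallery_embs):
    """Illustrative groundtruth quality: mean cosine similarity between
    the probe-image embedding and embeddings of high-quality (e.g.
    ICAO-compliant) gallery images of the same subject, mapped from
    [-1, 1] into [0, 1]."""
    p = probe_emb / np.linalg.norm(probe_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ p                      # cosine similarity per gallery image
    return float((sims.mean() + 1.0) / 2.0)

# Toy usage with placeholder 3-D embeddings (real ones would be
# high-dimensional FaceNet outputs):
e = np.array([1.0, 0.0, 0.0])
print(quality_label(e, np.stack([e, e])))  # identical embeddings -> 1.0
```

A CNN regressor (here, the fine-tuned ResNet-50) is then trained to predict this scalar directly from the image pixels.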
A Comparative Evaluation of Heart Rate Estimation Methods using Face Videos
This paper presents a comparative evaluation of methods for remote heart rate
estimation using face videos, i.e., given a video sequence of the face as
input, methods to process it to obtain a robust estimation of the subject's
heart rate at each moment. Four alternatives from the literature are tested,
three based on hand-crafted approaches and one based on deep learning. The
methods are compared using RGB videos from the COHFACE database. Experiments
show that the learning-based method achieves much better accuracy than the
hand-crafted ones. The low error rate achieved by the learning-based model
makes its application possible in real scenarios, e.g., in medical or sports
environments.
Comment: Accepted at the IEEE International Workshop on Medical Computing (MediComp) 2020
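A typical hand-crafted baseline of the kind evaluated above recovers the pulse from the color variation of the skin: average a color channel over the face region per frame, then find the dominant frequency within the plausible heart-rate band. The sketch below is a generic green-channel/FFT baseline under that assumption, not any specific method from the paper; `estimate_bpm` and the band limits are illustrative choices.

```python
import numpy as np

def estimate_bpm(green_trace, fps, lo=0.7, hi=4.0):
    """Hand-crafted rPPG baseline: dominant frequency of the per-frame
    mean green-channel trace within 0.7-4 Hz (42-240 bpm), converted
    to beats per minute."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                              # remove DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency bins in Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)          # plausible heart rates
    peak = freqs[band][np.argmax(power[band])]
    return 60.0 * peak

# Synthetic check: a 1.2 Hz oscillation at 30 fps for 10 s
t = np.arange(300) / 30.0
trace = np.sin(2 * np.pi * 1.2 * t)
print(estimate_bpm(trace, fps=30))  # dominant frequency 1.2 Hz, i.e. ~72 bpm
```

Learning-based methods replace this fixed pipeline with a network trained end-to-end on video, which is what the comparison above finds to be substantially more accurate.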
Board 399: The Freshman Year Innovator Experience (FYIE): Bridging the URM Gap in STEM
The project focuses on increasing “effective STEM education and broadening participation” among underrepresented minority STEM students at the University of Texas Rio Grande Valley (UTRGV). The Freshman Year Innovator Experience proposes developing self-transformation skills in freshman mechanical engineering students so that they can successfully face academic and professional challenges, recently exacerbated by the COVID-19 pandemic, while working on two parallel projects of technical design innovation and academic career pathways. The authors will present work in progress and preliminary results from a pilot implementation of the Freshman Year Innovator Experience. This project is funded by NSF award 2225247
mEBAL: A Multimodal Database for Eye Blink Detection and Attention Level Estimation
This work presents mEBAL, a multimodal database for eye blink detection and
attention level estimation. The eye blink frequency is related to the cognitive
activity and automatic detectors of eye blinks have been proposed for many
tasks including attention level estimation, analysis of neuro-degenerative
diseases, deception recognition, driver fatigue detection, or face
anti-spoofing. However, most existing databases and algorithms in this area are
limited to experiments involving only a few hundred samples and individual
sensors like face cameras. The proposed mEBAL improves previous databases in
terms of acquisition sensors and samples. In particular, three different
sensors are simultaneously considered: Near Infrared (NIR) and RGB cameras to
capture the face gestures and an Electroencephalography (EEG) band to capture
the cognitive activity of the user and blinking events. Regarding the size of
mEBAL, it comprises 6,000 samples and the corresponding attention level from 38
different students while conducting a number of e-learning tasks of varying
difficulty. In addition to presenting mEBAL, we also include preliminary
experiments on: i) eye blink detection using Convolutional Neural Networks
(CNN) with the facial images, and ii) attention level estimation of the
students based on their eye blink frequency.
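The second experiment above uses blink frequency as the input signal for attention estimation. A minimal sketch of that feature extraction, assuming the blink detector outputs a binary per-frame signal (1 = eye closed during a blink), is shown below; the function name and signal representation are illustrative assumptions, not mEBAL's published code.

```python
import numpy as np

def blinks_per_minute(blink_frames, fps):
    """Count blink events as rising edges (0 -> 1 transitions) in a
    binary per-frame blink signal, then normalise by the clip duration
    to obtain blinks per minute."""
    b = np.asarray(blink_frames, dtype=int)
    rises = int(np.sum((b[1:] == 1) & (b[:-1] == 0)))
    rises += int(b[0] == 1)           # a blink already open at frame 0
    duration_min = len(b) / fps / 60.0
    return rises / duration_min
```

For example, a 60-second clip at 30 fps containing 12 detected blinks yields 12 blinks per minute; such rates, compared against a subject's baseline, serve as a proxy for cognitive load in the attention-level experiments.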
Operational strategies guideline for packed bed thermal energy storage systems
Thermal energy storage (TES) is presented as a proven element for reaching sustainable and efficient management of any thermally driven process. The inherent thermodynamic limitations associated with a thermal system, such as the unavailability of an appropriate heat source, thermal losses, or improvable cycle efficiencies, justify the implementation of a TES. In practice, numerous industrial processes present noticeable enhancement opportunities with respect to the mentioned gaps. In this regard, solar-thermal power production, heat-intensive industries (steelmaking, glass, cement production, etc.), and compressed air energy storage are representative examples with such optimization potential